
    Usability Analysis of an off-the-shelf Hand Posture Estimation Sensor for Freehand Physical Interaction in Egocentric Mixed Reality

    This paper explores freehand physical interaction in egocentric Mixed Reality by performing a usability study on the use of hand posture estimation sensors. We report on precision, interactivity and usability metrics in a task-based user study, exploring the importance of additional visual cues when interacting. A total of 750 interactions were recorded from 30 participants performing 5 different interaction tasks (Move, Rotate: Pitch (Y axis) and Yaw (Z axis), Uniform scale: enlarge and shrink). Additional visual cues resulted in a shorter average time to interact; however, no consistent statistically significant differences were found between groups for performance and precision results. The group with additional visual cues gave the system an average System Usability Scale (SUS) score of 72.33 (SD = 16.24) while the other group scored 68.0 (SD = 18.68). Overall, additional visual cues led to the system being perceived as more usable, even though the two conditions had limited effect on precision and interactivity metrics.
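
    The SUS figures above come from the standard 10-item questionnaire. As a minimal sketch (not code from the paper), this is the conventional SUS scoring rule: odd items contribute (response − 1), even items contribute (5 − response), and the raw total is multiplied by 2.5 to give a 0-100 score.

```python
def sus_score(responses):
    """Standard SUS scoring for ten 1-5 Likert responses (0-100 result)."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    raw = sum((r - 1) if i % 2 == 1 else (5 - r)
              for i, r in enumerate(responses, start=1))
    return raw * 2.5  # raw total is 0-40; scale to 0-100

# e.g. a fairly positive response set
print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 2]))  # 80.0
```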

    Evaluation of Drop Shadows for Virtual Object Grasping in Augmented Reality

    This paper presents the use of rendered visual cues such as drop shadows and their impact on the overall usability and accuracy of grasping interactions for monitor-based exocentric Augmented Reality (AR). We report on two conditions, grasping with and without drop shadows, and analyse a total of 1620 grasps of two virtual object types (cubes and spheres). We report on the accuracy of one grasp type, the Medium Wrap grasp, against Grasp Aperture (GAp), Grasp Displacement (GDisp), completion time and usability metrics from 30 participants. A comprehensive statistical analysis of the results is presented, comparing grasping with and without drop shadows in AR. Findings showed that the use of drop shadows increases the usability of AR grasping while significantly decreasing task completion times. Furthermore, drop shadows also significantly improve users' depth estimation of AR object position. However, this study also shows that drop shadows do not improve users' object size estimation, which remains a problematic element in the AR grasping interaction literature.
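
    As an illustration of the two accuracy metrics named above, the sketch below assumes the common definitions from the grasping literature: Grasp Aperture as the thumb-index fingertip distance and Grasp Displacement as the offset between the grasp midpoint and the object's centre; the paper's exact operationalisation may differ.

```python
import numpy as np

def grasp_aperture(thumb_tip, index_tip):
    """GAp: Euclidean distance between thumb and index fingertip positions."""
    return float(np.linalg.norm(np.asarray(thumb_tip) - np.asarray(index_tip)))

def grasp_displacement(grasp_midpoint, object_centre):
    """GDisp: offset between the grasp midpoint and the object's centre."""
    return float(np.linalg.norm(np.asarray(grasp_midpoint) - np.asarray(object_centre)))

# Fingertips 8 cm apart, grasp midpoint 2 cm above a cube's centre (metres)
print(grasp_aperture([0.0, 0.0, 0.0], [0.08, 0.0, 0.0]))      # 0.08
print(grasp_displacement([0.1, 0.02, 0.0], [0.1, 0.0, 0.0]))  # 0.02
```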

    Voiceye: A Multimodal Inclusive Development Environment

    People with physical impairments who are unable to use traditional input devices (i.e. mouse and keyboard) are often excluded from technical professions (e.g. web development). Alternative input methods such as eye gaze tracking and speech recognition have become more readily available in recent years, with both being explored independently to support people with physical impairments in coding activities. This paper describes a novel multimodal application ("Voiceye") that combines voice input, gaze interaction, and mechanical switches as an alternative approach for writing code. The system was evaluated with non-disabled participants who have coding experience (N=29) to assess the feasibility of the application for writing HTML and CSS code. Results showed that Voiceye was perceived positively and enabled successful completion of coding tasks. A follow-up study with disabled participants (N=5) demonstrated that this method of multimodal interaction can support people with physical impairments in writing and editing code.
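
    Purely as a hypothetical sketch of how voice and gaze input might be fused for code entry (the abstract does not describe Voiceye's internals), a recognised voice command could select a snippet while the current gaze position chooses the insertion point in the editor buffer:

```python
# Hypothetical command table: spoken phrase -> code snippet to insert.
SNIPPETS = {
    "insert div": '<div class="container"></div>',
    "insert heading": "<h1>Title</h1>",
}

def handle_command(phrase, gaze_line, buffer_lines):
    """Insert the snippet for a recognised phrase at the line the user is looking at."""
    snippet = SNIPPETS.get(phrase.lower())
    if snippet is None:
        return buffer_lines  # unrecognised speech leaves the buffer unchanged
    return buffer_lines[:gaze_line] + [snippet] + buffer_lines[gaze_line:]

doc = ["<body>", "</body>"]
print(handle_command("Insert Div", 1, doc))
# ['<body>', '<div class="container"></div>', '</body>']
```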

    A Review of Interaction Techniques for Immersive Environments

    The recent proliferation of immersive technology has led to the rapid adoption of consumer-ready hardware for Augmented Reality (AR) and Virtual Reality (VR). While this increase has resulted in a variety of platforms that can offer a richer interactive experience, the advances in technology bring more variability in display types, interaction sensors and use cases. This provides a spectrum of device-specific interaction possibilities, with each offering a tailor-made solution for delivering immersive experiences to users, but often with an inherent lack of standardisation across devices and applications. To address this, a systematic review and an evaluation of explicit, task-based interaction methods in immersive environments are presented in this paper. A corpus of papers published between 2013 and 2020 is reviewed to thoroughly explore state-of-the-art user studies, which investigate input methods and their implementation for immersive interaction tasks (pointing, selection, translation, rotation, scale, viewport, menu-based and abstract). Focus is given to how input methods have been applied within the spectrum of immersive technology (AR, VR, XR). This is achieved by categorising findings based on display type, input method, study type, use case and task. Results illustrate key trends surrounding the benefits and limitations of each interaction technique and highlight the gaps in current research. The review provides a foundation for understanding the current and future directions for interaction studies in immersive environments, which, at this pivotal point in XR technology adoption, provides routes forward for achieving more valuable, intuitive and natural interactive experiences.

    Freehand Grasping: An Analysis of Grasping for Docking Tasks in Virtual Reality

    Natural and intuitive interaction in VR, such as grasping virtual objects, is still a significant challenge. While recent studies have begun to explore interactions that aim to create virtual environments that mimic reality as closely as possible, the dexterous versatility of the human grasp poses significant challenges for usable and intuitive interaction. At present, the design considerations for creating natural grasping-based interactions in VR are usually drawn from the body of historical knowledge on real object grasping. While this may be suitable for some applications, recent work has shown that users in VR grasp virtual objects differently than they would grasp real objects. Therefore, these interaction assumptions may not be directly applicable to furthering the natural interface for users of VR, leaving an absence of knowledge on how users intuitively grasp virtual objects. To begin to address this, we present two experiments in which participants (N=39) grasped 16 virtual objects categorised by shape in a mixed docking task exploring rotation, placement and target location. We report on a Wizard of Oz methodology and extract grasp types, grasp category and grasp dimension. We further provide insights into virtual object categorisation for assessing interaction patterns and how these could be used for developing natural and intuitive grasp models by parameterising the grasp types found in these experiments. Our results are of value to be taken forward into a framework of recommendations for grasping interactions, and thus begin to bridge the gap in understanding natural grasping patterns for VR object interactions.
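
    To make the extracted descriptors concrete, here is a hypothetical record structure for one logged grasp; the field values follow common grasp-taxonomy terminology (e.g. the Feix et al. GRASP taxonomy) rather than the paper's exact coding scheme.

```python
from dataclasses import dataclass

@dataclass
class GraspObservation:
    """One recorded grasp from a docking trial."""
    grasp_type: str    # taxonomy label, e.g. "medium wrap" or "precision sphere"
    category: str      # broad grasp category: "power", "precision" or "intermediate"
    dimension: str     # grasp dimension, e.g. opposition type: "palm", "pad", "side"
    object_shape: str  # which of the 16 shape-categorised virtual objects was grasped

sample = GraspObservation("medium wrap", "power", "palm", "cube")
```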

    Voice Snapping: Inclusive Speech Interaction Techniques for Creative Object Manipulation

    Voice input holds significant potential to support people with physical impairments in producing creative visual design outputs, although it is unclear whether well-established interaction methods used for manipulating graphical assets within mainstream creative applications (typically operated via a mouse, keyboard, or touch input) also present benefits for speech interaction. We present three new voice-controlled approaches utilizing interface snapping techniques for manipulating a graphical object's dimensions: NoSnap, UserSnap, and AutoSnap. A user evaluation with people who have physical impairments (N=25) found that each method enabled participants to successfully control a graphical object's size across a series of design tasks, although the automated snapping approach utilized within AutoSnap was found to be more efficient, accurate, and usable. Subjective feedback from participants also highlighted a strong preference for AutoSnap over the other techniques in terms of efficiency and ease of use.
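
    The abstract does not give implementation details, but the core idea behind a snapping technique can be sketched as follows: a spoken size change lands exactly on the nearest common value when it is close enough, sparing the user fine-grained corrective commands (the increment and threshold values here are illustrative assumptions):

```python
def snap(value, increment=10.0, threshold=2.5):
    """Snap a dimension to the nearest grid increment when within the threshold."""
    nearest = round(value / increment) * increment
    return nearest if abs(value - nearest) <= threshold else value

print(snap(98.7))  # 100.0 -> within 2.5 of the grid, snaps
print(snap(94.0))  # 94.0  -> too far from 90 or 100, left unchanged
```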

    Adaptive Tele-Therapies Based on Serious Games for Health for People with Time-Management and Organisational Problems: Preliminary Results

    Attention Deficit Hyperactivity Disorder (ADHD) is one of the most prevalent disorders within the child population today. Inattention problems can lead to greater difficulties in completing assignments, as well as problems with time management and the prioritisation of tasks. This article presents an intelligent tele-therapy tool based on Serious Games for Health, aimed at improving time management skills and the prioritisation of tasks. The tele-system is based on the use of decision trees within Django, a high-level Python web framework. The technologies and techniques used were selected to boost user involvement and to enable the system to be easily customised. This article shows the preliminary results of the pilot phase of an experiment performed to evaluate the use of adaptive tele-therapies within a group of typically developing children and adolescents aged between 12 and 19 years old without ADHD. To do so, we relied on the collection of parameters and the administration of surveys assessing time management skills, as well as measuring system usability and availability. The results of a time management survey highlighted that the users involved in the trial did not use any specific or effective time management techniques, scoring 1.98 and 2.30 out of 5 points in this area for those aged under 15 and over 16, respectively. The final calculations based on the usability questionnaire resulted in an average score of 78.75 out of 100. The creation of a customisable tool capable of working with different skills, in conjunction with the replication of the current study, may help to understand these users' needs, as well as boosting time management skills among teenagers with and without ADHD.
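
    The abstract states that the system adapts therapies using decision trees in a Django (Python) backend. As a minimal, hedged sketch of that idea, with entirely made-up session features and labels, a scikit-learn decision tree could map recent performance to the next session's difficulty:

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical per-session features: [tasks_completed, avg_task_time_s, errors]
X = [[8, 45, 1], [3, 120, 6], [6, 70, 2], [2, 150, 8], [9, 40, 0], [4, 110, 5]]
# Hypothetical label: next difficulty (0 = easier, 1 = unchanged, 2 = harder)
y = [2, 0, 1, 0, 2, 1]

model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X, y)
print(model.predict([[7, 60, 1]]))  # suggested difficulty for a new session
```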

    Assessing Visual Attention Using Eye Tracking Sensors in Intelligent Cognitive Therapies Based on Serious Games

    This study examines the use of eye tracking sensors as a means to identify children's behavior in attention-enhancement therapies. For this purpose, a set of data collected from 32 children with different attention skills is analyzed during their interaction with a set of puzzle games. The authors of this study hypothesize that participants with better performance may have quantifiably different eye-movement patterns from users with poorer results. The use of eye trackers outside the research community may help to extend their potential within available intelligent therapies, bringing state-of-the-art technologies to users. The use of gaze data constitutes a new information source in intelligent therapies that may help to build new approaches that are fully customized to final users' needs. This may be achieved by implementing machine learning algorithms for classification. The initial study of the dataset demonstrated a classification accuracy of 0.88 (±0.11) with a random forest classifier, using cross-validation and hierarchical tree-based feature selection. Further approaches need to be examined in order to establish more detailed attention behaviors and patterns among children with and without attention problems.
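
    The reported accuracy pipeline, a random forest evaluated with cross-validation, can be reproduced in outline with scikit-learn; the synthetic data below merely stands in for the study's gaze features, and the hierarchical tree-based feature selection step is omitted:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the gaze dataset (32 children, 12 gaze-derived features)
X, y = make_classification(n_samples=32, n_features=12, n_informative=5, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # per-fold classification accuracy
print(f"{scores.mean():.2f} (+/- {scores.std():.2f})")
```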